Supplementary Material: Simultaneous embedding of multiple attractor manifolds in a recurrent neural network using constrained gradient optimization

Neural Information Processing Systems

The dynamics of neural activity are described by a standard rate model. Energy landscapes were uniformly shifted throughout the manuscript by a constant. For each network with a different number of total embedded maps, 15 realizations were performed in which the permutations between the spatial maps were chosen independently and at random. Code availability: code is available at the public repository https://doi.org/10.5281/zenodo.10016179.
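The "standard rate model" referred to above is typically the equation tau * dr/dt = -r + phi(W r + I). A minimal sketch of Euler-integrating such dynamics, assuming a ReLU nonlinearity and illustrative parameter names (not taken from the paper):

```python
import numpy as np

def simulate_rate_model(W, I, T=200, dt=0.1, tau=1.0):
    """Euler-integrate tau * dr/dt = -r + phi(W r + I),
    with phi = ReLU (a common, but here assumed, choice)."""
    r = np.zeros(W.shape[0])
    for _ in range(T):
        r += (dt / tau) * (-r + np.maximum(0.0, W @ r + I))
    return r

# With no recurrence (W = 0) and positive input, rates relax to the input.
W = np.zeros((2, 2))
I = np.array([1.0, 2.0])
r = simulate_rate_model(W, I)
```

With W = 0 the fixed point is simply r = I, which the integration approaches geometrically.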



CrAM: Credibility-Aware Attention Modification in LLMs for Combating Misinformation in RAG

Deng, Boyi, Wang, Wenjie, Zhu, Fengbin, Wang, Qifan, Feng, Fuli

arXiv.org Artificial Intelligence

Retrieval-Augmented Generation (RAG) can alleviate hallucinations of Large Language Models (LLMs) by referencing external documents. However, misinformation in external documents may mislead LLMs' generation. To address this issue, we explore the task of "credibility-aware RAG", in which LLMs automatically adjust the influence of retrieved documents based on their credibility scores to counteract misinformation. To this end, we introduce a plug-and-play method named $\textbf{Cr}$edibility-aware $\textbf{A}$ttention $\textbf{M}$odification (CrAM). CrAM identifies influential attention heads in LLMs and adjusts their attention weights based on the credibility of the documents, thereby reducing the impact of low-credibility documents. Experiments on Natural Questions and TriviaQA using Llama2-13B, Llama3-8B, and Qwen-7B show that CrAM improves the RAG performance of LLMs against misinformation pollution by over 20%, even surpassing supervised fine-tuning methods.
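The core operation the abstract describes, down-weighting attention on low-credibility document tokens, can be sketched as follows. This is an illustrative simplification, not the paper's implementation: the function name, the per-document scalar credibility, and applying the scaling after the softmax are all assumptions.

```python
import numpy as np

def cram_adjust(attn_weights, doc_token_mask, credibility):
    """Scale one head's attention over a retrieved document's tokens by
    its credibility score, then renormalize to a valid distribution.

    attn_weights   : (seq_len,) softmaxed attention at one query position
    doc_token_mask : boolean (seq_len,) marking the document's tokens
    credibility    : scalar in [0, 1]; low values shrink the document's mass
    """
    adjusted = attn_weights.copy()
    adjusted[doc_token_mask] *= credibility   # down-weight suspect tokens
    return adjusted / adjusted.sum()          # renormalize

attn = np.array([0.1, 0.2, 0.3, 0.4])
mask = np.array([False, False, True, True])   # last two tokens: low-cred doc
out = cram_adjust(attn, mask, 0.5)
```

After renormalization, attention mass shifts from the low-credibility document's tokens to the rest of the context.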


Do You Remember? Overcoming Catastrophic Forgetting for Fake Audio Detection

Zhang, Xiaohui, Yi, Jiangyan, Tao, Jianhua, Wang, Chenglong, Zhang, Chuyuan

arXiv.org Artificial Intelligence

Current fake audio detection algorithms have achieved promising performance on most datasets. However, their performance may degrade significantly when dealing with audio from a different dataset. The orthogonal weight modification used to overcome catastrophic forgetting does not consider the similarity of genuine audio across different datasets. To overcome this limitation, we propose a continual learning algorithm for fake audio detection, called Regularized Adaptive Weight Modification (RAWM). When fine-tuning a detection network, our approach adaptively computes the direction of weight modification according to the ratio of genuine utterances to fake utterances. The adaptive modification direction ensures the network can effectively detect fake audio on the new dataset while preserving its knowledge of the old model, thus mitigating catastrophic forgetting. In addition, genuine audio collected under very different acoustic conditions may skew the feature distribution, so we introduce a regularization constraint that forces the network to remember the old distribution. Our method can easily be generalized to related fields, such as speech emotion recognition. We evaluate our approach across multiple datasets and obtain a significant performance improvement in cross-dataset experiments.
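The adaptive direction the abstract describes can be sketched by interpolating between the raw gradient and its component orthogonal to the old-task input subspace, as in orthogonal weight modification. A minimal illustration, assuming the genuine-utterance ratio is used directly as the mixing weight (the names and the exact weighting scheme are illustrative, not the paper's):

```python
import numpy as np

def rawm_direction(grad, old_inputs, genuine_ratio):
    """Mix the raw gradient with its projection orthogonal to the subspace
    spanned by old-task inputs (columns of old_inputs, shape (d, k)).

    genuine_ratio in [0, 1]: a higher share of genuine utterances, which
    look alike across datasets, permits more movement in the old subspace.
    """
    # Projection onto the column space of old_inputs
    P = old_inputs @ np.linalg.pinv(old_inputs)
    orth = grad - P @ grad                      # orthogonal component
    return genuine_ratio * grad + (1.0 - genuine_ratio) * orth
```

With genuine_ratio = 0 the update is fully orthogonal to the old inputs (pure forgetting protection); with genuine_ratio = 1 it reduces to the unconstrained gradient.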